
Do Wider Neural Networks Really Help Adversarial Robustness?

Neural Information Processing Systems

Adversarial training is a powerful type of defense against adversarial examples. Previous empirical results suggest that adversarial training requires wider networks for better performance. However, it remains elusive how neural network width affects model robustness. In this paper, we carefully examine the relationship between network width and model robustness. Specifically, we show that model robustness is closely related to the tradeoff between natural accuracy and perturbation stability, which is controlled by the robust regularization parameter λ.
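The decomposition hinted at in the abstract can be made concrete with a small numerical sketch. The toy predictions below are hypothetical (not the paper's data); the sketch assumes the usual definitions of natural accuracy (correct on clean inputs), perturbation stability (prediction unchanged under perturbation), and robust accuracy (correct on perturbed inputs), and checks the union-bound relation robust accuracy ≥ natural accuracy − (1 − stability):

```python
import numpy as np

# Hypothetical toy predictions, for illustration only.
y_true = np.array([0, 1, 1, 0, 1, 0])  # ground-truth labels
y_nat  = np.array([0, 1, 0, 0, 1, 1])  # predictions on clean inputs
y_adv  = np.array([0, 1, 0, 1, 1, 1])  # predictions on perturbed inputs

natural_acc = (y_nat == y_true).mean()  # accuracy on clean data
stability   = (y_adv == y_nat).mean()   # fraction of predictions unchanged under perturbation
robust_acc  = (y_adv == y_true).mean()  # accuracy under attack

# An example that is both naturally correct and stable is robustly correct,
# so by the union bound: robust_acc >= natural_acc - (1 - stability).
assert robust_acc >= natural_acc - (1.0 - stability) - 1e-9
print(natural_acc, stability, robust_acc)
```

Under this view, a larger λ pushes training toward higher stability, potentially at the cost of natural accuracy, which is the tradeoff the abstract refers to.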



Appendix introduction

Neural Information Processing Systems

The Appendix is organized as follows: In Appendix A, we state the symbols and notation used in this paper. In Appendix B, we provide the proofs and related lemmas of Theorem 1. In Appendix C, we provide the proofs of Theorem 2. In Appendix D, we provide the proofs and related lemmas of Theorem 3. In Appendix F, we discuss several limitations of this work. Finally, in Appendix G, we discuss the societal impact of this paper. Throughout the paper, vectors are denoted by bold lowercase letters and matrices by bold uppercase letters.


Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)

Neural Information Processing Systems

A plethora of aspects of robustness has been studied, ranging from training algorithms to their initialization, and from the width of neural networks to their depth (i.e., the architecture).


A Proof of Lemma 4.2: Lemma A.1 (Restatement of Lemma 4.2)

Neural Information Processing Systems

By Lemma A.5 of [19], and by substituting (A.5) into (A.1), we obtain the claimed bound. All experiments are conducted on a single NVIDIA V100 GPU, running GNU/Linux Debian 4.9; the experiments are implemented in PyTorch 1.6.0. This makes the learning problem of CIFAR100 much harder. To demonstrate that the over-fitting problem comes entirely from perturbation stability (Section 3.2(3)), we … We found this schedule to be the most effective one when training only on the original CIFAR10. In this part, we provide a complete visualization of the two terms in the equation. We test WideResNet-34 on CIFAR10 and CIFAR100.


